Latest Microsoft Dynamics 365 Blogs | CloudFronts

Exposing Plugins as Bound Actions for Power Automate Flows: A Practical Approach to Processing Multiple Records Efficiently

In complex business processes, such as calculating commissions or validating data across multiple records, applying the same logic repeatedly in a Power Automate flow can quickly become inefficient and difficult to maintain. A more scalable approach is to encapsulate the logic in a Dataverse plugin, expose it as a bound action, and then call that action from a flow. This method centralizes business rules, reduces redundancy, and improves maintainability. In this post, we'll walk through the steps to implement this approach and examine its advantages over applying the same logic directly within a flow for each individual record. We'll illustrate this with a practical example from a Houston-based technology consulting and cybersecurity services firm that specializes in modern digital transformation and enterprise security solutions.

Flow Diagram

Step 1: Create the Plugin
The first step is to write a plugin that contains the logic you want to apply to each record. Example: DuplicateCommissionsCounter.

Step 2: Expose the Plugin as a Bound Action
Instead of running the plugin logic manually for each record, you can register it as a bound action in Dataverse.
Procedure:
1. Create a custom action (or Custom API) bound to the target table, e.g. the invoice table.
2. Attach your plugin to this action.
Outcome: This exposes your plugin logic as a reusable, callable bound action. Any process or flow can now invoke it for a specific invoice record.

Step 3: Use Power Automate to Call the Bound Action
Once the plugin is exposed, you can loop through multiple records in a flow and call the action for each one.
Procedure in Power Automate: typically, you list the target records with the Dataverse connector, loop over them, and call the bound action for each record using a "Perform a bound action" step (a sketch of the underlying Web API call appears at the end of this post). This approach ensures that all complex logic resides in the plugin, while the flow orchestrates which records need processing.

Advantages Over Logic Directly in the Flow

To conclude, exposing plugins as bound actions is a robust, maintainable way to apply complex logic across multiple records in Dataverse. It allows Power Automate flows to focus on orchestration rather than logic execution, leading to cleaner, faster, and easier-to-manage solutions. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
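For reference, here is a minimal sketch of the Web API request that the "Perform a bound action" step issues under the hood. The organization URL, entity set name (cf_invoices), and action name (cf_DuplicateCommissionsCounter) are placeholders, not the names from the original solution, and the access token is assumed to be acquired separately (for example through Microsoft Entra ID).

```typescript
// Minimal sketch: invoke a bound custom action on a single Dataverse record.
// ORG_URL, "cf_invoices" (entity set) and "cf_DuplicateCommissionsCounter"
// (action unique name) are placeholders for your own solution.
const ORG_URL = "https://yourorg.crm.dynamics.com";

async function callBoundAction(recordId: string, accessToken: string): Promise<void> {
  // Bound actions are invoked against a specific record and must be qualified
  // with the Microsoft.Dynamics.CRM namespace in the Web API URL.
  const url =
    `${ORG_URL}/api/data/v9.2/cf_invoices(${recordId})` +
    `/Microsoft.Dynamics.CRM.cf_DuplicateCommissionsCounter`;

  const response = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0",
      Accept: "application/json",
    },
    body: JSON.stringify({}), // add input parameters here if the action defines any
  });

  if (!response.ok) {
    throw new Error(`Bound action failed: ${response.status} ${await response.text()}`);
  }
}
```

The flow only decides which record IDs to pass in; everything else (the commission counting itself) stays inside the plugin attached to the action.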


Renewing SSL Certificates in Dynamics 365 Finance and Operations

Managing secure operations in Microsoft Dynamics 365 Finance and Operations (D365FO) is crucial for enterprise-level environments. While working in a Dev environment, I recently encountered an issue while posting the packing slip for a Sales Order. After troubleshooting, I identified that the problem was related to expired SSL certificates in a cloud-hosted environment. SSL certificates in D365FO environments remain valid for one year. To maintain security and avoid disruptions, these certificates need to be renewed regularly, a process called credential rotation that is managed via the Lifecycle Services (LCS) portal. In this blog, I'll guide you step by step through resolving this issue with SSL certificate rotation.

Why SSL Certificate Rotation is Important
When deploying Dynamics 365 Finance and Operations as a cloud-hosted environment, SSL certificates are used to encrypt data and ensure secure communication between servers. Expired certificates can disrupt functionality. Regular rotation of credentials is a best practice to maintain smooth operations and robust cybersecurity.

Step-by-Step Process to Rotate SSL Certificates in Dynamics 365 Finance & Operations
Here's how you can resolve the issue and renew SSL certificates in your environment:
Step 1: Log into the LCS Environment
Step 2: Navigate to the Implementation Project
Step 3: Initiate Credential Rotation
Step 4: Rotate the SSL Certificates (Secrets)
Step 5: Wait for the Process to Complete
Step 6: Verify the Deployment Status

To conclude, regularly rotating SSL certificates not only resolves operational issues but also ensures compliance with enterprise-level cybersecurity practices. By following the steps above, you can maintain the security and functionality of your Dynamics 365 Finance and Operations cloud-hosted environments. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Monitoring Job Queues: Setting Up Failure Notifications using Power Automate

A job queue lets users set up and manage background tasks that run automatically. These tasks can be scheduled to run on a recurring schedule. For a long time, users had a common problem in Business Central: when a job queue failed, there was no alert or warning. You'd only notice something was wrong when a regular task didn't run for a few days. Some people tried to fix this by setting up another job queue to watch and restart failed ones, but that didn't always work, especially if an update happened at the same time. Now, Microsoft has finally added a built-in way to get alerts when a job queue fails. You can get notified either inside Business Central or by using Business Events. In this blog, we'll walk through using Business Events to set up notifications on job queue statuses.

Configuration
1. Search for "Assisted Setup" in Business Central's global search.
2. Scroll down to "Set up Job Queue notifications" and click Next.
3. Add the additional users who need to be notified when the job queue fails, along with the job creator (if required).
4. Choose whether you want the notification to be in-product or to use Business Events (and Power Automate). I'm choosing Business Events this time, then Next.
5. Click Finish.
6. Search for Job Queue Entries and, from that list page, open the Job Queue Entry card and select the Power Automate option.
7. If you are using Power Automate for the first time, it will ask for your consent. As an administrator, if you want to grant or revoke consent for all users at once, you can do so via the Privacy Notice statuses page.
8. Go back to the Job Queue Entry card and click Power Automate again. This time, you'll get the option to create an automated flow.
9. In the pop-up screen, you'll see the template for a job queue entry failure notification flow. Once you click it, it will ask you to sign into Business Central as well as the Outlook account that will be used to send the emails (if different from the current user).
10. In the next screen, you can add additional users who need to be copied on the notifications.
11. Click "Create Flow" and you are done!

Ideally, the setup should have worked at this point, but it didn't. After some digging, I found that the Power Automate flow was missing some key pieces. One of the actions didn't have the environment configured, and another action (GetUrlV3) isn't even available in the current (v25) version of Business Central. I came across two forum threads (1) (2) about this issue, but they had no clear solution. So, as a workaround, I created a web service based on the Job Queue Entries Log page and used the GetRecord action in Power Automate to fetch the required data (a sketch of that call appears at the end of this post). It wasn't too hard for me since I knew what to look for, but for a new user this would have been very confusing. I also noticed something odd: the action that picks the email address of the person to notify was pulling it from the Contact Email field on the User Card, instead of using User Setup, which would have made more sense. Anyway, after all that, here's what the final solution looks like!

To conclude, setting up job queue failure alerts with Business Events is a helpful new feature in Business Central. It lets you know when background tasks fail, so you don't have to keep checking them manually. But as we saw, the setup doesn't always work perfectly. Some parts were missing, and a few things didn't make sense, like where it pulls the email from.
If you're familiar with Power Automate, you can fix these issues with a few extra steps. For someone new, though, it might be a bit confusing. Hopefully, Microsoft will improve the setup in future updates. Until then, this blog should help you get the alerts working without too much trouble. If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
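As a reference for the workaround above, here is a minimal sketch of reading a published Job Queue Entries Log web service through Business Central's OData V4 endpoint, similar to the data the flow retrieves with its GetRecord step. The service name (JobQueueLogEntries), tenant, environment, company, and field names are assumptions based on how the page is published in my environment; yours will differ, and the access token is acquired separately.

```typescript
// Minimal sketch: read a published "Job Queue Entries Log" page web service
// via the Business Central OData V4 endpoint. The service name, tenant,
// environment, company, and field names are placeholders for your own setup.
const BASE_URL =
  "https://api.businesscentral.dynamics.com/v2.0/<tenant-id>/Production/ODataV4";

async function getLatestFailedJobQueueLogEntry(accessToken: string): Promise<unknown> {
  // Filter the log for the most recent entry that ended in an error so the
  // flow can include its details (object ID, error message) in the email.
  const url =
    `${BASE_URL}/Company('CRONUS')/JobQueueLogEntries` +
    `?$filter=Status eq 'Error'&$orderby=End_Date_Time desc&$top=1`;

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: "application/json",
    },
  });

  if (!response.ok) {
    throw new Error(`OData request failed: ${response.status}`);
  }
  const payload = await response.json();
  return payload.value[0]; // OData collections are returned under "value"
}
```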


Project Contract Types in D365: Fixed Price vs Time & Material vs Milestone

When you run a project-based business, such as construction, IT, consulting, or engineering, how you charge your customers matters just as much as what you deliver. If you're using Dynamics 365 Project Operations, you'll need to decide how to bill your projects. Microsoft gives you three main contract types: Fixed Price, Time & Material, and Milestone billing. Let's break down what each of these means, when to use them, and how Dynamics 365 helps manage them.

1. Fixed Price – One Total Amount
What is it? The customer pays a fixed amount for the full project or part of it, no matter how many hours or resources you actually use.
When to use:
What Dynamics 365 helps you do:
Be careful:
Think of this like constructing a house for a fixed price. You get paid in stages, not by the number of hours worked.

2. Time & Material – Pay as You Go
What is it? The customer pays based on the hours your team works and the cost of materials used.
When to use:
What Dynamics 365 helps you do:
Be careful:
This is like a taxi ride: you pay based on how far you go and how long it takes.

3. Milestone Billing – Pay for Key Deliverables
What is it? You agree on certain key points (milestones) in the project. When those are completed, the customer is billed.
When to use:
What Dynamics 365 helps you do:
Be careful:
It's like paying an architect after each part of a building design is done, not for every hour they work.

To conclude, choosing the right contract type helps you:
When your billing matches your work style, profits become more predictable and projects run smoother.

Need Help Deciding?
If you're not sure which billing model is best for your business, or how to set it up in Dynamics 365 Project Operations, we're here to help. Feel free to reach out to us at transform@cloudfronts.com. Let's find the right setup for your success.


Essential Power BI Tools for Power BI Projects

For growing businesses, Power BI solutions are critical, but development efficiency matters just as much, and it should not break the budget. As organizations scale their BI implementations, the need for advanced, free development tools increases, making smart tool selection essential to maintaining a competitive advantage.

Tool #1: DAX Studio – Your Free DAX Development Powerhouse
What Makes DAX Studio Essential: DAX Studio is one of the most critical free tools in any Power BI developer's arsenal. It provides advanced DAX development and performance analysis capabilities that Power BI Desktop simply cannot match.
Scenarios & Use Cases: For a global oil & gas solutions provider with a presence in six countries, we used DAX Studio to analyze model size, reduce memory consumption, and optimize large datasets, preventing refresh failures in the Power BI Service.

Tool #2: Tabular Editor 2 (Community Edition) – Free Model Management
Tabular Editor 2 Community Edition provides model development capabilities that would cost thousands of dollars on other platforms, completely free of charge.
Key Use Cases: We used Tabular Editor daily to efficiently manage measures, hide unused columns, standardize naming conventions, and apply best-practice model improvements across large datasets. This avoided repetitive manual work in Power BI Desktop for one of Europe's largest laboratory equipment manufacturers.

Tool #3: Power BI Helper (Community Edition) – Free Quality Analysis
Power BI Helper Community Edition provides professional model analysis and documentation features that rival expensive enterprise tools.
Key Use Cases: For a Europe-based laboratory equipment manufacturer, we used Power BI Helper to scan reports and datasets for common issues, such as unused visuals, inactive relationships, missing descriptions, and inconsistent naming conventions, before promoting solutions to UAT and Production.

Tool #4: Measure Killer
Measure Killer is a specialized tool designed to analyze Power BI models and identify unused or redundant DAX measures, helping improve model performance and maintainability.
Key Use Cases: For a technology consulting and cybersecurity services firm based in Houston, Texas (USA), specializing in modern digital transformation and enterprise security solutions, we used Measure Killer across Power BI engagements to quickly identify and remove unused measures and columns, ensuring optimized, maintainable models and improved report performance for enterprise clients.

To conclude, I encourage you to start building your professional Power BI toolkit today, without any budget constraints. Identify your biggest daily frustration, whether it's DAX debugging, measure management, or model optimization. Once you see how free tools can transform your workflow, you'll naturally want to explore the complete toolkit. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Browser-Level State Retention in Dynamics 365 CRM: Improving Performance & UX with Session Storage

Dynamics 365 model-driven apps are excellent at storing business data, but not every piece of information belongs in Dataverse. A common design mistake is using Dataverse fields to store temporary UI state: things like selected views, filters, or user navigation preferences. While this works technically, it introduces unnecessary performance overhead and can create incorrect behavior in multi-user environments. In this blog, I'll focus on browser-level retention of CRM UI data using sessionStorage, with subgrid view retention as a practical example for a technology consulting and cybersecurity services firm based in Houston, Texas, USA, specializing in modern digital transformation and enterprise security solutions.

The Real Problem: UI State vs Business Data
Let's separate concerns clearly:
Type | Example | Where it should live
Business data | Status, owner, amounts | Dataverse
UI state | Selected view, filter, scroll position | Browser
Subgrid views fall squarely into the UI state category.

Scenario: Subgrid View Resetting on Navigation
Users reported that whenever they selected a different subgrid view on a record and navigated away, the subgrid reverted to its default view on the next visit. This breaks user workflow, especially for records that users revisit frequently.

Possible Solution: Persisting UI State in Dataverse (Original Approach)
This approach would attempt to fix the issue by storing the selected subgrid view GUID in a Dataverse field on the parent record.
How It Works
Why this might look reasonable
The Hidden Problems
1] Slower Form Execution
2] Data Model Pollution
3] Incorrect Multi-User Behavior
4] Scalability Issues
In short, Dataverse was doing work it should never have been asked to do.

Workaround to this Approach: Keep UI State in the Browser for that Session
Practically, the selected subgrid view belongs to the user's session, not the record. Once that boundary is respected, the solution becomes much cleaner.

Practical Solution: Browser Session Storage (Improved Approach)
Instead of persisting the view selection in Dataverse, we store it locally in the browser using sessionStorage. sessionStorage is part of the Web Storage API, which provides a way to store key-value pairs in a web browser. Unlike localStorage, which persists data even after the browser is closed, sessionStorage is designed to store data only for the duration of a single session. This means the data is available as long as the tab or window is open, and it is cleared when the tab or window is closed.
Why Session Storage?
How the Improved Solution Works
1. Store the view locally on subgrid change.
2. Restore the view on form/grid load.
This ensures that when the user revisits the form, the subgrid opens exactly where they left off. A script sketch of both steps appears at the end of this post.

Why This Approach Is Superior
1] Faster Execution
2] Correct User Experience
3] Clean Architecture
4] Zero Backend Impact

When to Use Browser-Level Retention
Use this pattern when:
Examples:

To conclude, not all data deserves to live in Dataverse. When you store UI state in the browser instead of the database, you gain:
Subgrid view retention is just one example, but the principle applies broadly across Dynamics 365 customizations. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
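Here is a minimal sketch of the two steps above, assuming a form script registered on the form's OnLoad event and a subgrid control named "Contacts_Subgrid"; both names are placeholders, and the view selector must be enabled on the subgrid for the Client API calls to work. Adjust the control name and key scheme to your own form.

```typescript
// Minimal sketch: remember the selected subgrid view for the current browser
// session. "Contacts_Subgrid" is a placeholder control name; register
// onFormLoad on the form's OnLoad event and pass the execution context.
const SUBGRID_NAME = "Contacts_Subgrid";

function viewKey(recordId: string): string {
  return `subgridView:${SUBGRID_NAME}:${recordId}`;
}

function onFormLoad(executionContext: any): void {
  const formContext = executionContext.getFormContext();
  const recordId = formContext.data.entity.getId();
  const gridControl = formContext.getControl(SUBGRID_NAME);
  if (!gridControl || !recordId) {
    return;
  }

  // Step 2: restore the view on form load, if one was stored for this record
  // earlier in the same browser session.
  const saved = sessionStorage.getItem(viewKey(recordId));
  if (saved) {
    gridControl.getViewSelector().setCurrentView(JSON.parse(saved));
  }

  // Step 1: store the view locally whenever the subgrid reloads, e.g. after
  // the user picks a different view from the view selector.
  gridControl.addOnLoad(() => {
    const currentView = gridControl.getViewSelector().getCurrentView();
    sessionStorage.setItem(viewKey(recordId), JSON.stringify(currentView));
  });
}
```

Because the key includes both the control name and the record ID, each record keeps its own selection, and everything disappears automatically when the tab is closed.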


Functional Cycle of Dynamics 365 Project Operations

Microsoft Dynamics 365 Project Operations (D365 PO) is an end-to-end solution designed for project-based organizations that need to manage the entire project lifecycle, from sales and estimation to delivery, time tracking, costing, and billing. It unifies capabilities from Project Management, Sales, Resource Planning, Time Tracking, and Financials into a single platform. This article outlines the complete functional cycle of D365 Project Operations, demonstrating how it supports project-based service delivery efficiently.

Full Cycle of Project Operations in D365
1. Lead to Opportunity
The journey begins when a potential customer expresses interest in a service.
2. Quoting & Estimation
Once requirements are understood, a Project Quote is created:
3. Project Contract & Setup
After customer acceptance, a Project Contract is created:
4. Project Planning
Project Managers build out the work breakdown structure (WBS):
5. Resource Management
Once project tasks are defined, resources are assigned:
6. Time & Expense Management
Assigned resources start delivering work and logging effort:
7. Costing & Financial Tracking
Behind the scenes, every time or expense entry is tracked for:
8. Invoicing & Revenue Recognition
Based on approved time, expenses, or milestones:

Integration Capabilities
D365 PO integrates with:

Reporting & Analytics
Out-of-the-box dashboards include:

Dynamics 365 Project Operations enables organizations to manage the full project lifecycle, from opportunity creation to revenue recognition, without fragmentation between systems or teams.
Key takeaways:
For project-based organizations, D365 Project Operations is not just a project management tool; it is an operational backbone for scalable, profitable service delivery.

To conclude, Dynamics 365 Project Operations is most effective when viewed not as a standalone application, but as a connected operating model for project-based organizations. When implemented correctly, it bridges the traditional gaps between sales promises, delivery execution, and financial outcomes, turning projects into predictable, scalable business assets. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


What Are Databricks Clusters? A Simple Guide for Beginners

A Databricks cluster is a group of virtual machines (VMs) in the cloud that work together to process data using Apache Spark. It provides the memory, CPU, and compute power required to run your code efficiently. Clusters are used for:
Each cluster has two main parts: a driver node and worker (executor) nodes.

Types of Clusters
Databricks supports multiple cluster types, depending on how you want to work.
Cluster Type | Use Case
Interactive (All-Purpose) Clusters | Used for notebooks, ad-hoc queries, and development. Multiple users can attach their notebooks.
Job Clusters | Created automatically for scheduled jobs or production pipelines. Deleted after job completion.
Single Node Clusters | Used for small data exploration or lightweight development. No executors, only one driver node.

How Databricks Clusters Work
When you execute a notebook cell, Databricks sends your code to the cluster. The cluster's driver node divides your task into smaller jobs and distributes them to the executors. The executors process the data in parallel and send the results back to the driver. This distributed processing is what makes Databricks fast and scalable for handling massive datasets.

Step-by-Step: Creating Your First Cluster
Let's create a cluster in your Databricks workspace.
Step 1: Navigate to Compute. In the Databricks sidebar, click Compute. You'll see a list of existing clusters or an option to create a new one.
Step 2: Create a New Cluster. Click Create Compute in the top-right corner.
Step 3: Configure Basic Settings.
Step 4: Select Node Type. Choose the VM type based on your workload. For development, Standard_DS3_v2 or Standard_D4ds_v5 are cost-effective.
Step 5: Auto-Termination. Set the cluster to terminate after 10 or 20 minutes of inactivity. This prevents unnecessary cost when the cluster is idle.
Step 6: Review and Create. Click Create Compute. After a few minutes, your cluster will turn green, indicating it is ready to run code.
(For teams that prefer scripting this setup, a sketch of the equivalent REST API call appears after the tables below.)

Clusters in Unity Catalog-Enabled Workspaces
If Unity Catalog is enabled in your workspace, there are a few additional configurations to note.
Feature | Standard Workspace | Unity Catalog Workspace
Access Mode | Default is Single User. | Must choose Shared, Single User, or No Isolation Shared.
Data Access | Managed by workspace permissions. | Controlled through Catalog, Schema, and Table permissions.
Data Hierarchy | Database → Table | Catalog → Schema → Table
Example Query | SELECT * FROM sales.customers; | SELECT * FROM main.sales.customers;
When you create a cluster with Unity Catalog, you will see a new Access Mode field on the configuration page. Choose "Shared" if multiple users need to access governed data under Unity Catalog.

Managing Cluster Performance and Cost
Clusters can become expensive if not managed properly. Follow these tips to optimize performance and cost:
a. Use auto-termination to shut down idle clusters automatically.
b. Choose the right VM size for your workload. Avoid oversizing.
c. Use Job Clusters for production pipelines since they start and stop automatically.
d. Leverage autoscaling so Databricks can adjust the number of workers dynamically.
e. Monitor with Ganglia metrics to identify performance bottlenecks.

Common Cluster Issues and Fixes
Issue | Cause | Fix
Cluster stuck starting | VM quota exceeded or region issue | Change VM size or region.
Slow performance | Too few workers or data skew | Increase worker count or repartition data.
Access denied to data | Missing storage credentials | Use Databricks Secrets or Unity Catalog permissions.
High cost | Idle clusters running | Enable auto-termination.
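As referenced in the steps above, here is a minimal sketch of creating a comparable cluster through the Databricks Clusters REST API instead of the UI. The workspace URL, token, cluster name, and runtime version string are placeholders, and the runtime versions available in your workspace may differ; treat this as an illustration of the same settings (node type, workers, auto-termination) rather than a fixed recipe.

```typescript
// Minimal sketch: create an all-purpose cluster via the Databricks Clusters API,
// mirroring the UI settings above. WORKSPACE_URL, the token, the cluster name,
// and the runtime version string are placeholders for your own workspace.
const WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net";

async function createDevCluster(token: string): Promise<string> {
  const response = await fetch(`${WORKSPACE_URL}/api/2.1/clusters/create`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      cluster_name: "dev-small-cluster",
      spark_version: "15.4.x-scala2.12", // pick an LTS runtime listed in your workspace
      node_type_id: "Standard_DS3_v2",   // cost-effective node type for development
      num_workers: 2,
      autotermination_minutes: 20,       // shut down automatically when idle
    }),
  });

  if (!response.ok) {
    throw new Error(`Cluster creation failed: ${response.status} ${await response.text()}`);
  }
  const { cluster_id } = await response.json();
  return cluster_id; // use this ID to monitor, edit, or terminate the cluster later
}
```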
Best Practices for Using Databricks Clusters
1. Always attach your notebook to the correct cluster before running it.
2. Use development, staging, and production clusters separately.
3. Keep the cluster runtime version consistent across environments.
4. Terminate unused clusters to reduce cost.
5. If you use Unity Catalog, prefer Shared clusters for collaboration.

To conclude, clusters are the heart of Databricks. They provide the compute power needed to process large-scale data efficiently. Without them, Databricks Notebooks and Jobs cannot run. Once you understand how clusters work, you will find it easier to manage costs, optimize performance, and build reliable data pipelines. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Time Travel in Databricks: A Complete, Simple & Practical Guide

Databricks Time Travel is a powerful feature of Delta Lake that allows you to access older versions of your data. Whether you want to debug issues, recover deleted records, compare historical performance, or audit how data changed over time, Time Travel makes it effortless. It's like having a complete rewind button for your tables, eliminating the fear of accidental updates or deletes.

What is Time Travel?
Time Travel enables you to query previous snapshots of a Delta table using either VERSION AS OF or TIMESTAMP AS OF. Delta automatically versions every transaction (UPDATE, MERGE, DELETE, INSERT), so you can always go back to an earlier state without restoring backups manually. This versioning is stored in the Delta log, making rewind operations efficient and reliable.

Why Time Travel Matters (Use Cases)
Debugging Pipelines: Quickly check what the data looked like before a bad job ran.
Accidental Deletes: Recover records or entire tables.
Audit & Compliance: Easily demonstrate how data has evolved.
Root Cause Analysis: Compare two versions side by side.
Model Re-training: Use historical datasets to retrain ML models.
Data Quality Tracking: Validate when incorrect data first appeared.

How Delta Stores Versions (Architecture Overview)
Delta Lake stores metadata and version history inside the _delta_log folder. Each commit creates a new JSON or checkpoint Parquet file representing the table state. When you run a query using Time Travel, Databricks does not rebuild the entire table. Instead, it directly reads the snapshot based on the transaction log. This architecture makes Time Travel extremely fast and scalable, even on very large datasets.

Time Travel Commands
Query older data:
SELECT * FROM table VERSION AS OF 5;
SELECT * FROM table TIMESTAMP AS OF '2024-11-20T10:00:00';
A. Example: DESCRIBE HISTORY. Below is an example of using DESCRIBE HISTORY on a Delta table.
B. Querying a Specific Version. Here is how you can fetch an older snapshot using VERSION AS OF.
C. Restoring a Table. You can restore a Delta table to any older version using RESTORE TABLE.
(A sketch of running these commands programmatically appears at the end of this post.)

Retention Rules
Delta keeps older versions based on two configs:
`delta.logRetentionDuration` → how long commit logs are stored.
`delta.deletedFileRetentionDuration` → how long old data files are retained.
By default, Databricks keeps 30 days of history. You can increase this if your compliance policy requires longer retention.

Best Practices
– Use Time Travel for debugging pipeline issues.
– Increase retention for sensitive or audited datasets.
– Use `DESCRIBE HISTORY` frequently during development.
– Avoid unnecessarily large retention windows; they increase storage costs.
– Use `RESTORE` carefully in production environments.

To conclude, Time Travel in Databricks brings reliability, auditability, and simplicity to modern data engineering. It protects teams from accidental data loss and gives full visibility into how datasets evolve. With just a few commands, you can analyze, compare, or restore historical data instantly, making it one of the most useful features of Delta Lake. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
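If you want to run these time-travel checks outside a notebook, for example from a small audit script, here is a minimal sketch that submits one of the queries above through the Databricks SQL Statement Execution API against a SQL warehouse. The workspace URL, warehouse ID, token, and table name are placeholders, and inside a notebook you would simply run the same SQL directly in a cell.

```typescript
// Minimal sketch: run a time-travel query through the SQL Statement Execution
// API. WORKSPACE_URL, WAREHOUSE_ID, the token, and the table name are
// placeholders; inside a notebook you would just run the SQL directly.
const WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net";
const WAREHOUSE_ID = "abcdef1234567890";

async function queryOldVersion(token: string, version: number): Promise<unknown[]> {
  const response = await fetch(`${WORKSPACE_URL}/api/2.0/sql/statements/`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      warehouse_id: WAREHOUSE_ID,
      statement: `SELECT * FROM main.sales.customers VERSION AS OF ${version}`,
      wait_timeout: "30s", // wait synchronously; fine for small result sets
    }),
  });

  if (!response.ok) {
    throw new Error(`Statement failed: ${response.status} ${await response.text()}`);
  }
  const payload = await response.json();
  if (payload.status?.state !== "SUCCEEDED") {
    throw new Error(`Statement not finished: ${payload.status?.state}`);
  }
  return payload.result?.data_array ?? []; // rows returned as arrays of string values
}
```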


Bank & Payment Reconciliation in Microsoft Dynamics 365 Business Central

In any organization, reconciling bank and payment data is critical to maintaining accurate financial records and cash visibility. Microsoft Dynamics 365 Business Central offers robust tools for Bank Reconciliation and Payment Reconciliation Journals, helping businesses match bank statements with ledger entries, identify discrepancies, and streamline financial audits. In this article, I outline how I learned to perform these reconciliations efficiently using Business Central.

Bank Reconciliation ensures that transactions recorded in the bank ledger match those on the actual bank statement.
Steps:
Benefits:
Reconciled statements are stored and can be printed or exported for documentation.

Set Up Payment Reconciliation Journals
The Payment Reconciliation Journal is used to match customer/vendor payments against open invoices or entries. It supports automatic suggestions and match rules for fast processing.
Configuration Steps:
Setup ensures the journal is ready to load incoming payments and suggest matches automatically.

Use the Payment Reconciliation Journal
Once the setup is done, you can use the journal to reconcile incoming payments against customer/vendor invoices.
Daily Workflow:
Features:

Business Value
Feature | Value
Speed | Auto-matching reduces reconciliation time
Accuracy | Eliminates manual errors and duplicate entries
Audit Ready | Clear audit trail for external and internal auditors
Cash Flow Clarity | Real-time visibility into paid/unpaid invoices

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com




